Video-based person re-identification method based on graph convolution network and self-attention graph pooling
Yingmao YAO, Xiaoyan JIANG
Journal of Computer Applications    2023, 43 (3): 728-735.   DOI: 10.11772/j.issn.1001-9081.2022010034

To address the degradation of video-based person re-identification caused by occlusion, spatial misalignment and background clutter in cross-camera network videos, a video-based person re-identification method based on Graph Convolutional Network (GCN) and Self-Attention Graph Pooling (SAGP) was proposed. Firstly, the correlation information of different regions between frames in the video was mined through patch relation graph modeling, and the region features in the frame-by-frame images were optimized by GCN to alleviate problems such as occlusion and misalignment. Then, regions contributing little to person features were removed by the SAGP mechanism to avoid interference from background clutter. Finally, a weighted loss function strategy was proposed: center loss was used to optimize the classification learning results, and Online soft mining and Class-aware attention Loss (OCL) was used to solve the problem that available samples were not fully used during hard sample mining. Experimental results on the MARS dataset show that, compared with the sub-optimal Attribute-aware Identity-hard Triplet Loss (AITL), the proposed method improves mean Average Precision (mAP) and Rank-1 by 1.3 and 2.0 percentage points respectively. The proposed method makes better use of the spatial-temporal information in the video to extract more discriminative person features, and improves the effect of person re-identification.
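The two core operations described above (graph convolution over patch regions, then self-attention graph pooling to drop low-contribution regions) can be sketched in a few lines of numpy. The single-layer form, the shapes and the keep ratio are illustrative assumptions, not the paper's actual configuration:

```python
import numpy as np

def gcn_layer(A, X, W):
    """One graph-convolution layer: symmetrically normalize A (with
    self-loops), propagate features, apply a linear map and ReLU."""
    A_hat = A + np.eye(A.shape[0])              # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return np.maximum(d_inv_sqrt @ A_hat @ d_inv_sqrt @ X @ W, 0.0)

def sag_pool(A, X, w_score, keep_ratio=0.5):
    """Self-attention graph pooling: score nodes with a 1-channel GCN,
    keep the top-k patches, and drop low-contribution (e.g. background
    clutter) regions; kept features are gated by their scores."""
    scores = gcn_layer(A, X, w_score).ravel()
    k = max(1, int(keep_ratio * X.shape[0]))
    idx = np.argsort(-scores)[:k]
    return A[np.ix_(idx, idx)], X[idx] * np.tanh(scores[idx])[:, None]
```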

Waterweed image segmentation method based on improved U-Net
Qiwen WU, Jianhua WANG, Xiang ZHENG, Ju FENG, Hongyan JIANG, Yubo WANG
Journal of Computer Applications    2022, 42 (10): 3177-3183.   DOI: 10.11772/j.issn.1001-9081.2021091614

During the operation of Unmanned Surface Vehicles (USVs), the propellers easily become entangled by waterweeds, a problem faced across the whole industry. Concerning the global distribution, dispersivity, and complex edges and textures of waterweeds in water surface images, U-Net was improved and used to classify all pixels in the image, in order to reduce the feature loss of the network and enhance the extraction of both global and local features, thereby improving the overall segmentation performance. Firstly, image data of waterweeds at multiple locations and periods were collected, and a comprehensive waterweed dataset for semantic segmentation was built. Secondly, input images at three scales were introduced into the network to enable full feature extraction, and three loss functions for the upsampled images were introduced to balance the overall loss brought by the three input scales. In addition, a hybrid attention module, comprising a dilated convolution branch and a channel attention enhancement branch, was proposed and added to the network. Finally, the proposed network was verified on the newly built waterweed dataset. Experimental results show that the accuracy, mean Intersection over Union (mIoU) and mean Pixel Accuracy (mPA) of the proposed method reach 96.8%, 91.22% and 95.29% respectively, improvements of 4.62, 3.87 and 3.12 percentage points over the U-Net (VGG16) segmentation method. The proposed method can be applied on unmanned surface vehicles to detect waterweeds and plan corresponding paths to avoid them.
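The reported mIoU and mPA metrics can be computed from a pixel-level confusion matrix; a minimal numpy sketch (two classes assumed, matching the waterweed/background setting):

```python
import numpy as np

def confusion(pred, gt, num_classes=2):
    """Pixel-level confusion matrix: rows are ground truth, columns are
    predictions, built with a single bincount over flattened labels."""
    idx = num_classes * gt.ravel() + pred.ravel()
    return np.bincount(idx, minlength=num_classes ** 2).reshape(num_classes, num_classes)

def miou_mpa(pred, gt, num_classes=2):
    """Mean Intersection over Union and mean Pixel Accuracy over classes."""
    cm = confusion(pred, gt, num_classes)
    inter = np.diag(cm)
    union = cm.sum(0) + cm.sum(1) - inter
    iou = inter / np.maximum(union, 1)
    pa = inter / np.maximum(cm.sum(1), 1)
    return iou.mean(), pa.mean()
```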

Classification algorithm based on undersampling and cost-sensitiveness for unbalanced data
WANG Junhong, YAN Jiarong
Journal of Computer Applications    2021, 41 (1): 48-52.   DOI: 10.11772/j.issn.1001-9081.2020060878
Focusing on the low prediction accuracy of traditional classifiers on the minority class of an unbalanced dataset, an unbalanced data classification algorithm based on undersampling and cost-sensitivity, called USCBoost (UnderSamples and Cost-sensitive Boosting), was proposed. Firstly, before the base classifier was trained by the AdaBoost (Adaptive Boosting) algorithm in each iteration, the majority class samples were sorted in descending order of weight, majority class samples equal in number to the minority class samples were selected according to the sample weights, the weights of the sampled majority class samples were normalized, and a temporary training set was formed from these majority class samples and the minority class samples to train the base classifier. Secondly, in the weight update stage, a higher misclassification cost was given to the minority class, which made the weights of minority class samples increase faster and the weights of majority class samples increase more slowly. USCBoost was compared with AdaBoost, AdaCost (Cost-sensitive AdaBoosting) and RUSBoost (Random Under-Sampling Boosting) on ten UCI datasets. Experimental results show that USCBoost achieves the highest scores on six and nine of the datasets under the F1-measure and G-mean criteria respectively. The proposed algorithm has better classification performance on unbalanced data.
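The two stages described above can be sketched in numpy; the cost factor of 2.0 for the minority class is an illustrative assumption, since the abstract does not give the actual cost values:

```python
import numpy as np

def undersample_majority(X, y, w, majority=0, minority=1):
    """One USCBoost-style sampling step: sort majority-class samples by
    weight (descending) and keep only as many as there are minority
    samples; the kept majority weights are then normalized."""
    maj = np.where(y == majority)[0]
    mino = np.where(y == minority)[0]
    top = maj[np.argsort(-w[maj])][:len(mino)]
    sel = np.concatenate([top, mino])
    w_sel = w[sel].astype(float)
    s = w_sel[:len(top)].sum()
    if s > 0:
        w_sel[:len(top)] /= s
    return X[sel], y[sel], w_sel

def cost_sensitive_update(w, y, pred, alpha, cost_minority=2.0, minority=1):
    """Cost-sensitive weight update: misclassified minority samples gain
    weight faster than misclassified majority samples."""
    cost = np.where(y == minority, cost_minority, 1.0)
    w = w * np.exp(alpha * cost * (pred != y))
    return w / w.sum()
```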
Weakly illuminated image enhancement algorithm based on convolutional neural network
CHENG Yu, DENG Dexiang, YAN Jia, FAN Ci'en
Journal of Computer Applications    2019, 39 (4): 1162-1169.   DOI: 10.11772/j.issn.1001-9081.2018091979
Existing weakly illuminated image enhancement algorithms depend strongly on the Retinex model and require manual parameter adjustment. To solve these problems, an algorithm based on a Convolutional Neural Network (CNN) was proposed to enhance weakly illuminated images. Firstly, four image enhancement techniques were used to process the weakly illuminated image to obtain four derivative images: a contrast limited adaptive histogram equalization derivative image, a Gamma correction derivative image, a logarithmic correction derivative image, and a bright channel enhancement derivative image. Then, the weakly illuminated image and its four derivative images were input into the CNN. Finally, the enhanced image was output by the CNN. The proposed algorithm can directly map a weakly illuminated image to a normally illuminated image in an end-to-end way, without estimating an illumination map or reflection map according to the Retinex model and without adjusting any parameters. The proposed algorithm was compared with the Naturalness Preserved Enhancement Algorithm for non-uniform illumination images (NPEA), Low-light image enhancement via Illumination Map Estimation (LIME), LightenNet (LNET), and other algorithms. In the experiment on synthetic weakly illuminated images, the average Mean Square Error (MSE), Peak Signal-to-Noise Ratio (PSNR) and Structural SIMilarity index (SSIM) metrics of the proposed algorithm are superior to those of the comparison algorithms. In the experiment on real weakly illuminated images, the average Natural Image Quality Evaluator (NIQE) and entropy metrics of the proposed algorithm are the best of all the compared algorithms, and the average contrast gain ranks second. Experimental results show that, compared with the other algorithms, the proposed algorithm is more robust, and the images it enhances have richer details, higher contrast, and better visual effect and image quality.
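The four derivative images can be sketched as follows for an image normalized to [0, 1]. The gamma value is illustrative, and plain histogram equalization stands in for CLAHE to keep the sketch dependency-free:

```python
import numpy as np

def derivative_images(img):
    """Build the four hand-crafted inputs described above, for an H x W x 3
    image with values in [0, 1]."""
    gam = img ** 0.4                                   # gamma correction (gamma < 1 brightens)
    log = np.log1p(img * 255.0) / np.log(256.0)        # logarithmic correction
    # bright channel: per-pixel max over color channels, broadcast to 3 channels
    bright = img.max(axis=-1, keepdims=True) * np.ones_like(img)
    # global histogram equalization (CLAHE stand-in): map intensities through the CDF
    hist, bins = np.histogram(img.ravel(), bins=256, range=(0.0, 1.0))
    cdf = hist.cumsum() / hist.sum()
    he = np.interp(img.ravel(), bins[:-1], cdf).reshape(img.shape)
    return he, gam, log, bright
```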
Image retrieval algorithm based on saliency semantic region weighting
CHEN Hongyu, DENG Dexiang, YAN Jia, FAN Ci'en
Journal of Computer Applications    2019, 39 (1): 136-142.   DOI: 10.11772/j.issn.1001-9081.2018051150
For image instance retrieval in the field of computer vision, a semantic region weighted aggregation method guided by the saliency of deep convolutional features was proposed. Firstly, the tensor after the last convolutional layer of a deep convolutional network was extracted as the deep feature. A feature saliency map was obtained by weighting the deep feature with the Inverse Document Frequency (IDF) method, and then used as a constraint to guide the ordering of deep feature channels by importance, so as to extract the deep features of different semantic regions and exclude interference from background and noise information. Finally, global average pooling was used for feature aggregation, and the global feature representation of the image was obtained by Principal Component Analysis (PCA) dimension reduction and whitening for distance metric retrieval. The experimental results show that the proposed image retrieval algorithm based on saliency semantic region weighting is more accurate and robust than current mainstream algorithms on four standard databases, because the image feature vector extracted by the proposed algorithm is richer and more discriminative.
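An IDF-style channel weighting followed by global average pooling, in the spirit of the method above, can be sketched as follows (the exact weighting formula is an assumption; the abstract does not give it):

```python
import numpy as np

def saliency_weighted_descriptor(feat):
    """feat: C x H x W activation tensor from the last conv layer.
    Channels are weighted by an IDF-like score (channels active almost
    everywhere, e.g. background-dominated ones, are downweighted), then
    global average pooling aggregates them into one descriptor."""
    C, H, W = feat.shape
    active = (feat > 0).reshape(C, -1).sum(axis=1)     # active positions per channel
    idf = np.log((H * W + 1.0) / (active + 1.0))       # IDF-style channel weight
    weighted = feat * idf[:, None, None]
    return weighted.reshape(C, -1).mean(axis=1)        # global average pooling
```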
Natural scene text detection based on maximally stable extremal region in color space
FAN Yihua, DENG Dexiang, YAN Jia
Journal of Computer Applications    2018, 38 (1): 264-269.   DOI: 10.11772/j.issn.1001-9081.2017061389
To solve the problem that text regions cannot be extracted well from low contrast images by the traditional Maximally Stable Extremal Regions (MSER) method, a novel scene text detection method based on edge enhancement was proposed. Firstly, the MSER method was effectively improved with the Histogram of Oriented Gradients (HOG): its robustness to low contrast images was enhanced, and MSER was applied in color space. Secondly, a Bayesian model was used for character classification: three features with translation and rotation invariance, namely stroke width, edge gradient direction and inflexion points, were used to delete non-character regions. Finally, the characters were grouped into text lines by their geometric characteristics. The proposed algorithm was evaluated on standard benchmarks such as International Conference on Document Analysis and Recognition (ICDAR) 2003 and ICDAR 2013. The experimental results demonstrate that edge-enhanced MSER in color space can correctly extract text regions from complex and low contrast images, and the Bayesian classification method can detect characters from a small sample set with high recall. Compared with traditional MSER based text detection, the proposed algorithm improves the detection rate and the real-time performance of the system.
Saliency detection based on guided Boosting method
YE Zitong, ZOU Lian, YAN Jia, FAN Ci'en
Journal of Computer Applications    2017, 37 (9): 2652-2658.   DOI: 10.11772/j.issn.1001-9081.2017.09.2652
Aiming at the problems of impure training samples and overly simple feature extraction in existing saliency detection models based on guided learning, an improved Boosting-based saliency detection algorithm was proposed, which improves the purity of the training sample set and the feature extraction scheme to achieve a better learning effect. Firstly, a coarse sample map was generated by a bottom-up saliency detection model and quickly and effectively optimized by cellular automata to establish reliable Boosting training samples, which were used to label the original images. Then, color and texture features were extracted from the training set. Finally, Support Vector Machine (SVM) weak classifiers with different features and different kernels were combined by Boosting into a strong classifier, each pixel of the image was classified as foreground or background, and a saliency map was obtained. Experimental results on the ASD and SED1 databases show that the proposed algorithm can produce complete and clear saliency maps for both complex and simple images, with a good Area Under Curve (AUC) value for the precision-recall curve. Owing to its accuracy, the proposed algorithm can be applied in the pre-processing stage of computer vision tasks.
Self-adaptive group based sparse representation for image inpainting
LIN Jinyong, DENG Dexiang, YAN Jia, LIN Xiaoying
Journal of Computer Applications    2017, 37 (4): 1169-1173.   DOI: 10.11772/j.issn.1001-9081.2017.04.1169
Focusing on the problems of discontinuous object structure and poor texture detail that occur in image inpainting, an inpainting algorithm based on self-adaptive groups was proposed. Different from traditional methods, which use a single image block or a fixed number of image blocks as the repair unit, the proposed algorithm adaptively selects different numbers of similar image blocks according to the characteristics of the texture area to construct a self-adaptive group. A self-adaptive dictionary and a sparse representation model were established in the domain of the self-adaptive group. Finally, the target cost function was solved by Split Bregman Iteration. The experimental results show that, compared with the patch-based inpainting algorithm and the Group-based Sparse Representation (GSR) algorithm, the proposed approach improves the Peak Signal-to-Noise Ratio (PSNR) by 0.94-4.34 dB and the Structural SIMilarity (SSIM) index by 0.0069-0.0345 respectively, while obtaining inpainting speed-ups of 2.51 and 3.32 times respectively.
Objective quality assessment for color-to-gray images based on visual similarity
WANG Man, YAN Jia, WU Minyuan
Journal of Computer Applications    2017, 37 (10): 2926-2931.   DOI: 10.11772/j.issn.1001-9081.2017.10.2926
The Color-to-Gray (C2G) image quality evaluation algorithm based on structural similarity does not make full use of the gradient features of the image, and its contrast similarity feature ignores the consistency of the continuous color blocks of the image, leading to a large gap between the algorithm and subjective human visual judgment. A C2G image quality evaluation algorithm named C2G Visual Similarity Index Measurement (C2G-VSIM) was proposed based on the Human Visual System (HVS). In this algorithm, the color image was regarded as the reference image, and the corresponding decolorized images obtained by different algorithms were regarded as test images. Color space conversion and Gaussian filtering were applied to the reference and test images, taking full account of brightness similarity and structural similarity; a new color consistency contrast feature was introduced to help C2G-VSIM capture the global color contrast, and the gradient magnitude feature was also introduced to improve the sensitivity of the algorithm to image gradients. Finally, by combining the above features, the new image quality evaluation operator C2G-VSIM was obtained. Experimental results on Cadík's dataset show that the Spearman Rank Order Correlation Coefficient (SROCC) between C2G-VSIM and subjective human visual assessment is 0.8155 for accuracy and 0.7634 for preference; accuracy is improved significantly without increasing time consumption compared to C2G Structure Similarity Index Measurement (C2G-SSIM). The proposed algorithm is highly consistent with human vision and simple to compute, and can effectively and automatically evaluate decolorized images in large-scale practical projects.
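The gradient branch can be illustrated with the standard gradient-magnitude similarity formula; this is a sketch of the general idea, not the exact C2G-VSIM definition:

```python
import numpy as np

def gradient_similarity(ref, test, c=1e-3):
    """Mean pixel-wise gradient-magnitude similarity between a reference
    luminance image and a decolorized test image; `c` is a small
    stability constant, as in SSIM-style similarity maps."""
    def grad_mag(x):
        gy, gx = np.gradient(x.astype(float))
        return np.sqrt(gx ** 2 + gy ** 2)
    g1, g2 = grad_mag(ref), grad_mag(test)
    sim = (2 * g1 * g2 + c) / (g1 ** 2 + g2 ** 2 + c)
    return sim.mean()
```

Identical images score exactly 1; any gradient distortion in the decolorized image pulls the score below 1.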
Geographic routing algorithm based on anchor nodes in vehicular network
ZHENG Zheng, LI Yunfei, YAN Jianfeng, ZHAO Yongjie
Journal of Computer Applications    2013, 33 (12): 3460-3464.  
Vehicular networks are characterized by fast-moving nodes and rapidly changing topology. Direct use of Global Positioning System (GPS) devices causes large positioning errors and a low routing connectivity rate, so the packet delivery rate of existing location-based routing algorithms is not high enough to provide reliable routing. A geographic routing algorithm based on anchor nodes in vehicular networks, named Geographic Routing based on Anchor Nodes (GRAN), was proposed. Using street lamps as anchor nodes, a vehicle could locate itself through the anchor nodes. Combined with road gateways and the central data, GRAN established a hierarchical routing structure, removing the steps of route discovery and whole-network broadcast; thus the routing overhead was reduced and the routing efficiency and packet delivery rate were improved. A simulation of Greedy Perimeter Stateless Routing (GPSR), Graphic Source Routing (GSR) and GRAN was conducted in NS-2 using a realistic urban scene. The experimental results show that, compared with several typical location-based routing protocols, GRAN provides lower average delay and higher packet delivery ratio and throughput at a lower load.
Design and implementation of distributed retrieval system for electronic products information
ZHANG YuanYuan, ZHANG Qinyan, JIANG Guanfu
Journal of Computer Applications    2013, 33 (04): 1026-1030.   DOI: 10.3724/SP.J.1087.2013.01026
In order to obtain useful information that satisfies user requirements, a distributed information retrieval system based on Hadoop and Lucene was proposed to handle Web electronic product information retrieval. To improve retrieval efficiency, the MapReduce method of Hadoop was used to implement the storage of distributed index files, and Lucene was used to implement access to the distributed index files. An improved method at the fine-grained retrieval level was also proposed, which reduced the index building time. The experiments demonstrate that the proposed distributed information retrieval system has good retrieval performance for Web electronic product information.
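The map/reduce split of index building can be illustrated with a toy inverted index in Python; Lucene's actual index format and Hadoop's job API are of course far richer than this sketch:

```python
from collections import defaultdict

def map_phase(doc_id, text):
    """Map step: emit (term, doc_id) pairs for each token, Hadoop-style."""
    return [(term.lower(), doc_id) for term in text.split()]

def reduce_phase(pairs):
    """Reduce step: group postings by term into an inverted index,
    mapping each term to the set of documents containing it."""
    index = defaultdict(set)
    for term, doc_id in pairs:
        index[term].add(doc_id)
    return index
```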
Satellite cloud image fusion based on regional feature with nonsubsampled contourlet transform
WANG Da, BI Shuo-ben, WANG Bi-qiang, YAN Jian
Journal of Computer Applications    2012, 32 (09): 2585-2587.   DOI: 10.3724/SP.J.1087.2012.02585
The fusion of different satellite cloud images can provide more comprehensive information for the surveillance and early warning of disastrous weather. A satellite cloud image fusion algorithm based on regional features with the NonSubsampled Contourlet Transform (NSCT) was proposed. Firstly, the source images were decomposed at multiple scales and in multiple directions by NSCT. Then a self-adaptive fusion rule based on the regional correlation coefficient and regional energy was used to fuse the low frequency coefficients, and a fusion rule combining regional variance with weighting was used to fuse the high frequency coefficients. Finally, the fused image was obtained by performing the inverse NSCT on the fused coefficients. The experimental results illustrate that the texture and edge features of the fused cloud image are enriched while the infrared information is preserved as much as possible, and the proposed algorithm achieves better fusion results.
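The low-frequency fusion rule can be illustrated with a simplified, energy-only variant (the paper's rule also adaptively uses the regional correlation coefficient, which is omitted here):

```python
import numpy as np

def fuse_lowpass(a, b, win=3):
    """At each position, keep the low-frequency coefficient whose
    surrounding win x win window has the higher regional energy."""
    pad = win // 2
    ap = np.pad(a, pad, mode='edge')
    bp = np.pad(b, pad, mode='edge')
    out = np.empty_like(a, dtype=float)
    for i in range(a.shape[0]):
        for j in range(a.shape[1]):
            ea = (ap[i:i + win, j:j + win] ** 2).sum()  # regional energy in a
            eb = (bp[i:i + win, j:j + win] ** 2).sum()  # regional energy in b
            out[i, j] = a[i, j] if ea >= eb else b[i, j]
    return out
```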
Trusted attestation of measurement action information base
YAN Jian-hong, PENG Xin-guang
Journal of Computer Applications    2012, 32 (01): 56-59.   DOI: 10.3724/SP.J.1087.2012.00056
To improve the flexibility and efficiency of remote attestation, dynamic behavior attestation based on the Merkle hash tree was proposed. The process of creating the Attestation Measurement Action Information Base (AM_AIB) tree was designed. The client measured and calculated the current root hash value, which was signed by the Trusted Platform Module (TPM) and then transmitted to the server side for certification. If it was consistent with the server-side hash value, the behavior was considered credible. The AM_AIB model could also be designed at different granularities according to the characteristics of the behavior. The experimental results show that the proposed method improves time performance and protects platform privacy. It is flexible, overcomes the static nature of attribute-based verification, and ensures that platform application software runs credibly.
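The root-hash computation at the heart of the scheme can be sketched with a plain Merkle tree over measurement strings; the AM_AIB node layout and the TPM signing step are omitted:

```python
import hashlib

def merkle_root(leaves):
    """Root hash of a Merkle tree over measurement values: leaves are
    hashed, then paired and re-hashed level by level until one node
    remains. Any change in any measurement changes the root."""
    level = [hashlib.sha256(x.encode()).digest() for x in leaves]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate the last node on odd levels
            level.append(level[-1])
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0].hex()
```

The server only needs to compare this root value with its own; individual measurements stay on the client, which is what protects platform privacy.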
TCP finite state machine and protocol parse applied in removal of false alerts
SHUAI Chun-yan, JIANG Jian-hui, OU-YANG Xin
Journal of Computer Applications    2011, 31 (05): 1271-1275.   DOI: 10.3724/SP.J.1087.2011.01271
Concerning the enormous number of alerts produced by Intrusion Detection Systems (IDSs), a method based on protocol parsing and a Transmission Control Protocol (TCP) Finite State Machine (FSM) model was proposed to remove false alerts. For alerts produced by connectionless request/response protocols, the method made judgments by analyzing the attack features of request packets and the return status codes of response packets. For alerts produced by TCP, the packets were parsed and a TCP FSM model was built to judge whether the packet series came from the same TCP connection and whether that connection included attack sequences, so as to remove false alerts. Finally, experiments on the DARPA 2000 datasets show that the proposed method can reduce false alerts by more than 59.47% on average, and the alert recognition rate for TCP and the request/response protocols reaches 76.67%. The method is simple and efficient, depends only on the attack feature database of the IDS, and can be implemented online as a plug-in.
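The FSM check can be illustrated with a heavily trimmed state machine: a sequence of packet events either reaches a valid state or is rejected, and alerts attached to impossible sequences can be discarded as false. The states and events here are illustrative, not the full TCP state diagram:

```python
# Simplified client-side TCP finite state machine: only the handshake and
# teardown transitions that matter for deciding whether packets form a
# plausible connection.
TCP_FSM = {
    ("CLOSED",      "send_syn"):     "SYN_SENT",
    ("SYN_SENT",    "recv_syn_ack"): "ESTABLISHED",
    ("ESTABLISHED", "send_fin"):     "FIN_WAIT",
    ("FIN_WAIT",    "recv_fin_ack"): "CLOSED",
}

def run_fsm(events, state="CLOSED"):
    """Feed packet events through the FSM; return the final state, or
    None if the sequence is impossible (e.g. a reply with no handshake),
    in which case the associated alert can be treated as false."""
    for ev in events:
        state = TCP_FSM.get((state, ev))
        if state is None:
            return None
    return state
```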
Optimized hand-over scheme based on SIP for stratospheric telecommunication platform
Yan JIANG
Journal of Computer Applications   
In a Stratospheric Telecommunication Platform, when User Equipment (UE) moves to another platform, its IP address changes. If the UE changes access technology, it automatically obtains a new IP address. After this reassignment of the IP address, the UE has to re-register to the IP Multimedia Subsystem (IMS) and send new INVITEs to all corresponding nodes before its sessions can continue, which potentially introduces a long delay in ongoing sessions. A solution was introduced to reduce the handover delay by sharing the registration information and call states of the UE. The full REGISTER and INVITE flows were not necessary in that case, since the servers in the IMS already had state information about the UE and its sessions from the context transfer. The benefit in reduced handover delay was theoretically analyzed in terms of the number of signaling packets and the delay of rebuilding the communication. An OPNET simulation was carried out to verify the solution.